
    Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of finite element results is mesh dependent, mesh selection is a very important step in the analysis. Indeed, in accurate analyses, meshes must be refined or rezoned until the solution converges to within a predetermined error tolerance. A posteriori methods use error indicators, developed from interpolation and approximation theory, to drive mesh refinement. Others use criteria such as strain energy density variation or stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a priori methods available until now use geometrical parameters, such as element aspect ratio, and are therefore not adaptive by nature. Here, an adaptive a priori method is developed. The criterion is that minimizing the trace of the stiffness matrix with respect to the nodal coordinates minimizes the potential energy and consequently provides a good starting mesh. In several examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to implementation in computer algorithms. When the procedure is used in conjunction with a posteriori grid-refinement methods, fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The resulting mesh is shown to have a uniform distribution of stiffness among the nodes and elements, which in turn leads to a uniform error distribution. The mesh thus meets the optimality criterion of uniform error distribution.
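    The trace-minimization criterion can be illustrated on a one-dimensional tapered bar. The following sketch (invented parameters, not the paper's code) discretizes the bar with two linear elements and scans the interior node position for the placement that minimizes the trace of the assembled stiffness matrix:

```python
# Hypothetical 1-D illustration of the a-priori criterion described above:
# a tapered elastic bar with two linear elements, where the interior node
# position x is chosen to minimize the trace of the global stiffness matrix.

E = 1.0            # Young's modulus (arbitrary units)
L = 1.0            # bar length
A0, A1 = 1.0, 0.2  # cross-sectional area tapers linearly from A0 to A1

def area(x):
    """Cross-sectional area at position x (linear taper)."""
    return A0 + (A1 - A0) * x / L

def trace_of_stiffness(x):
    """Trace of the 3x3 global stiffness matrix for nodes at 0, x, L.

    A linear bar element of length h and mid-point area A contributes
    (E*A/h) * [[1, -1], [-1, 1]], i.e. 2*E*A/h to the trace.
    """
    h1, h2 = x, L - x
    a1 = area(0.5 * h1)        # mid-point area of element 1
    a2 = area(x + 0.5 * h2)    # mid-point area of element 2
    return 2 * E * a1 / h1 + 2 * E * a2 / h2

# Simple scan over candidate interior-node positions.
best_x = min((0.01 * i * L for i in range(1, 100)), key=trace_of_stiffness)

uniform = trace_of_stiffness(0.5 * L)
print(f"optimal interior node at x = {best_x:.2f}, "
      f"trace {trace_of_stiffness(best_x):.3f} vs uniform mesh {uniform:.3f}")
```

    For this taper the optimal interior node sits away from the uniform-mesh midpoint, and the trace at the optimum is strictly below the uniform-mesh value, consistent with the claim that the criterion redistributes stiffness among elements.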

    Visuomotor Transformation in the Fly Gaze Stabilization System

    For sensory signals to control an animal's behavior, they must first be transformed into a format appropriate for use by its motor systems. This fundamental problem is faced by all animals, including humans. Beyond simple reflexes, little is known about how such sensorimotor transformations take place. Here we describe how the outputs of a well-characterized population of fly visual interneurons, lobula plate tangential cells (LPTCs), are used by the animal's gaze-stabilizing neck motor system. The LPTCs respond to visual input arising from both self-rotations and translations of the fly. The neck motor system, however, is involved in gaze stabilization and thus mainly controls compensatory head rotations. We investigated how the neck motor system is able to selectively extract rotation information from the mixed responses of the LPTCs. We recorded extracellularly from fly neck motor neurons (NMNs) and mapped the directional preferences across their extended visual receptive fields. Our results suggest that, like the tangential cells, NMNs are tuned to panoramic retinal image shifts, or optic flow fields, which occur when the fly rotates about particular body axes. In many cases, tangential cells and motor neurons appear to be tuned to similar axes of rotation, resulting in a correlation between the coordinate systems the two neural populations employ. However, in contrast to the primarily monocular receptive fields of the tangential cells, most NMNs are sensitive to visual motion presented to either eye. This makes the NMNs more selective for rotation than the LPTCs. Thus, the neck motor system increases its rotation selectivity by a comparatively simple mechanism: the integration of binocular visual motion information.

    On Compact Routing for the Internet

    While there exist compact routing schemes designed for grids, trees, and Internet-like topologies that offer routing tables whose sizes scale logarithmically with the network size, we demonstrate in this paper that, in view of recent results in compact routing research, such logarithmic scaling on Internet-like topologies is fundamentally impossible in the presence of topology dynamics or topology-independent (flat) addressing. We use analytic arguments to show that the number of routing control messages per topology change cannot scale better than linearly on Internet-like topologies. We also employ simulations to confirm that logarithmic routing table size scaling is broken by topology-independent addressing, a cornerstone of popular locator-identifier split proposals aiming at improving routing scaling in the presence of network topology dynamics or host mobility. These pessimistic findings lead us to the conclusion that a fundamental re-examination of the assumptions behind routing models and abstractions is needed in order to find a routing architecture able to scale "indefinitely."
    Comment: This is a significantly revised, journal version of cs/050802.

    Creating An Executive Compensation Plan: A Corporate Tax Planning Case

    In this case, the student takes on the role of a compensation committee team member in a publicly traded corporation. The case task is to create a compensation plan for the company's CEO using past data regarding CEO compensation and specific incentives from proxy statements, along with financial statement performance data. The case requires the student to consider multiple issues, both tax and non-tax. To develop a comprehensive plan for the executive, students must incorporate numerous course topics and apply them to the particular fact pattern. Pre- and post-case responses from students confirm that this case furthers their understanding of the interplay between corporate tax rules and executive compensation while cultivating critical thinking skills. The case has been used in both graduate tax and graduate accounting courses. To accommodate different teaching philosophies and course emphases (tax or accounting), three variations of the project are provided.

    Mitigating sampling error when measuring internet client IPv6 capabilities

    Despite the predicted exhaustion of unallocated IPv4 addresses between 2012 and 2014, it remains unclear how many current clients can use its successor, IPv6, to access the Internet. We propose a refinement of previous measurement studies that mitigates intrinsic measurement biases, and demonstrate a novel web-based technique using Google ads to perform IPv6 capability testing on a wider range of clients. After applying our sampling error reduction, we find that 6% of world-wide connections are from IPv6-capable clients, but only 1–2% of connections preferred IPv6 in dual-stack (dual-stack failure rates less than 1%). Except for an uptick around IPv6 Day 2011, these proportions were relatively constant, while the percentage of connections with IPv6-capable DNS resolvers has increased to nearly 60%. The percentage of connections from clients with native IPv6 using happy eyeballs has risen to over 20%.
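    The capability/preference distinction above can be sketched as a simple post-processing step. The record layout and function names below are invented for illustration, not taken from the paper: each sampled connection records whether an IPv4-only test object, an IPv6-only test object, and a dual-stack test object succeeded, and which family the dual-stack fetch used.

```python
# Hypothetical classification of sampled client connections into IPv6
# capability categories; the field names and toy data are invented.

from collections import Counter

def classify(v4_ok, v6_ok, dualstack_used_v6):
    """Classify one client connection from its three test-object results."""
    if v6_ok and dualstack_used_v6:
        return "prefers_v6"   # capable AND chose IPv6 when both were offered
    if v6_ok:
        return "v6_capable"   # capable, but dual-stack fetch went over IPv4
    if v4_ok:
        return "v4_only"
    return "failed"

# Toy sample: (v4_ok, v6_ok, dual-stack fetch used IPv6)
samples = [
    (True, False, False),  # typical IPv4-only client
    (True, True, False),   # capable but preferred IPv4 in dual-stack
    (True, True, True),    # preferred IPv6
    (True, False, False),
    (True, False, False),
]

tally = Counter(classify(*s) for s in samples)
total = len(samples)
capable = tally["v6_capable"] + tally["prefers_v6"]
print(f"IPv6-capable: {capable / total:.0%}, "
      f"preferred IPv6: {tally['prefers_v6'] / total:.0%}")
```

    The gap between the "capable" and "prefers" tallies is exactly the gap the abstract reports between the 6% capable figure and the 1–2% that actually preferred IPv6 in dual-stack.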

    Electronic contribution to the oscillations of a gravitational antenna

    We carefully analyse the contribution to the oscillations of a metallic gravitational antenna due to the interaction between the electrons of the bar and the incoming gravitational wave. To this end, we first derive the total microscopic Hamiltonian of the wave-antenna system and then compute the contribution to the attenuation factor due to the electron-graviton interaction. As compared to the ordinary damping factor, which is due to the electron viscosity, this term turns out to be totally negligible. This result confirms that the only relevant mechanism for the interaction of a gravitational wave with a metallic antenna is its direct coupling with the bar normal modes.
    Comment: 25 pages, no figures.

    Some Considerations in Optimizing the Medical Physics Match

    The Medical Physics Match has proven its usefulness to the AAPM community, but it is not universally utilized, for a variety of reasons. This invited guest editorial explores the scholarly history of the match algorithm and suggests some avenues to optimize its future use.
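    Matching programs of this kind are generally built on deferred-acceptance (Gale-Shapley-style) stable matching. The following is a minimal applicant-proposing sketch with invented applicants, programs, and preferences; the actual Medical Physics Match runs a production implementation of this family of algorithms, not this code.

```python
# Minimal applicant-proposing deferred-acceptance sketch (Gale-Shapley style).
# All names and preference lists are invented for illustration.

def deferred_acceptance(applicant_prefs, program_prefs, capacity):
    """Return a stable assignment of applicants to programs."""
    # rank[p][a] = how highly program p ranks applicant a (lower is better)
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}  # next program each will try
    matched = {p: [] for p in program_prefs}       # program -> held applicants
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                               # a has exhausted its list
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        matched[p].append(a)                       # p tentatively holds a
        matched[p].sort(key=lambda x: rank[p][x])
        if len(matched[p]) > capacity[p]:          # over capacity: bump worst
            free.append(matched[p].pop())
    return matched

applicants = {"A1": ["P1", "P2"], "A2": ["P1", "P2"], "A3": ["P2", "P1"]}
programs = {"P1": ["A2", "A1", "A3"], "P2": ["A1", "A3", "A2"]}
result = deferred_acceptance(applicants, programs, {"P1": 1, "P2": 2})
print(result)
```

    The deferred-acceptance structure is what gives the match its key property: no applicant and program both prefer each other to their assigned outcome.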

    Finite-element grid improvement by minimization of stiffness matrix trace

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on minimizing the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur, and identical results are obtained.

    The impact of new neutrino DIS and Drell-Yan data on large-x parton distributions

    New data sets have recently become available for neutrino and antineutrino deep inelastic scattering on nuclear targets and for inclusive dimuon production in pp and pd interactions. These data sets are sensitive to different combinations of parton distribution functions in the large-x region and, therefore, provide different constraints when incorporated into global parton distribution function fits. We compare and contrast the effects of these new data on parton distribution fits, with special emphasis on the effects at large x. The effects of the use of nuclear targets in the neutrino and antineutrino data sets are also investigated.
    Comment: 24 pages, 13 figures.